Rage against the machine: a California community rallied against a datacenter – and won

The Guardian > Energy

Monterey Park residents gathered at city hall on 21 January to speak out against the construction of a datacenter. Sat 7 Feb 2026 11.00 EST. Last modified on Sat 7 Feb 2026 16.55 EST. When a southern California city council proposed building a giant datacenter the size of four football fields last December, five residents vowed to stop it. Through a frenetic word-of-mouth campaign, the small group raised awareness about the proposed facility in Monterey Park, a small city east of Los Angeles known affectionately as the country's first suburban Chinatown. No Data Center Monterey Park organizers, working in tandem with the grassroots racial justice group San Gabriel Valley (SGV) Progressive Action, held a teach-in and rally that drew hundreds of participants, knocked on doors, and distributed flyers on busy streets.


Deepfaking Orson Welles's Mangled Masterpiece

The New Yorker

A.I. re-creations of the "Magnificent Ambersons" stars Joseph Cotten, Agnes Moorehead, Dolores Costello, and Tim Holt. Edward Saatchi first saw "The Magnificent Ambersons," Orson Welles's mangled masterpiece from 1942, when he was twelve years old, in the private screening room of his family's crenellated mansion, in West Sussex. Saatchi's parents had already shown him and his brother "Citizen Kane." But "Ambersons," Welles's follow-up film, about a wealthy Midwestern clan brought low, came with a bewitching backstory: R.K.O. had ripped the movie from the director's hands, slashed forty-three minutes, tacked on a happy ending, and destroyed the excised footage in order to free up vault space, leaving decades' worth of cinephiles to obsess over what might have been. Part of this outcome was the result of studio treachery, but Welles, owing to some combination of hubris and distraction, had let his film slip from his grasp. Saatchi recalled, "Around the family dinner table, that was always such a big topic: How much was Welles responsible for this? Mum was always quite tough on him." Saatchi's father, Maurice, a baron also known as Lord Saatchi, is one of two Iraqi British brothers who founded the advertising firm Saatchi & Saatchi, in 1970, which led their family to become one of the richest in the U.K. Edward's mother, Josephine Hart, who died in 2011, was an Irish writer best known for her erotic thriller "Damage," which was adapted into a film by Louis Malle. Edward, born in 1985, grew up in London and at the sprawling country estate, surrounded by palatial gardens and classical statuary. He described his parents as "movie mad." The actor and Welles biographer Simon Callow, a Saatchi family friend, recalled, "They had a cinema of their own inside the house, and it was a ritual of theirs every week to watch a film together." 
Aside from old movies, Edward was obsessed with "Star Trek"--especially the Holodeck, a device that conjured simulated 3-D worlds populated by characters who could interact with the members of the Starship Enterprise. That kind of wizardry didn't exist in the real world, at least not yet. But the young prince of the Saatchi castle had faith that someday it would, and that it could bring the original "Ambersons" back from oblivion. "To me, this is the lost holy grail of cinema," Saatchi told me recently, like Charles Foster Kane murmuring about Rosebud. "It just seemed intuitively that there would be some way to undo what had happened."


Fox News AI Newsletter: Blue-collar productivity boom

FOX News




Unsupervised decoding of encoded reasoning using language model interpretability

Fang, Ching, Marks, Samuel

arXiv.org Artificial Intelligence

As large language models become increasingly capable, there is growing concern that they may develop reasoning processes that are encoded or hidden from human oversight. To investigate whether current interpretability techniques can penetrate such encoded reasoning, we construct a controlled testbed by fine-tuning a reasoning model (DeepSeek-R1-Distill-Llama-70B) to perform chain-of-thought reasoning in ROT-13 encryption while maintaining intelligible English outputs. We evaluate mechanistic interpretability methods--in particular, logit lens analysis--on their ability to decode the model's hidden reasoning process using only internal activations. We show that logit lens can effectively translate encoded reasoning, with accuracy peaking in intermediate-to-late layers. Finally, we develop a fully unsupervised decoding pipeline that combines logit lens with automated paraphrasing, achieving substantial accuracy in reconstructing complete reasoning transcripts from internal model representations. These findings suggest that current mechanistic interpretability techniques may be more robust to simple forms of encoded reasoning than previously understood. Our work provides an initial framework for evaluating interpretability methods against models that reason in non-human-readable formats, contributing to the broader challenge of maintaining oversight over increasingly capable AI systems.
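The testbed's cipher, ROT-13, is an involution: applying it twice returns the original text, which is what makes the hidden chain of thought mechanically recoverable once the right vocabulary projection is found. A minimal sketch of the cipher itself (not the paper's fine-tuning or logit-lens pipeline), using Python's standard codecs module:

```python
import codecs

def rot13(text: str) -> str:
    """ROT-13: rotate each ASCII letter by 13 places, leaving other characters intact."""
    return codecs.encode(text, "rot_13")

# A model fine-tuned to reason in ROT-13 would emit chains of thought like this:
encoded_cot = rot13("First add 2 and 3, then multiply by 4.")

# Because ROT-13 is its own inverse, decoding is the same operation.
assert rot13(encoded_cot) == "First add 2 and 3, then multiply by 4."
```

In the paper's setting, the logit lens plays the role of the decoder: intermediate activations are projected through the unembedding matrix, and the resulting tokens (peaking in intermediate-to-late layers) turn out to be readable translations of the encoded reasoning.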


The MICCAI Federated Tumor Segmentation (FeTS) Challenge 2024: Efficient and Robust Aggregation Methods for Federated Learning

Linardos, Akis, Pati, Sarthak, Baid, Ujjwal, Edwards, Brandon, Foley, Patrick, Ta, Kevin, Chung, Verena, Sheller, Micah, Khan, Muhammad Irfan, Jafaritadi, Mojtaba, Kontio, Elina, Khan, Suleiman, Mächler, Leon, Ezhov, Ivan, Shit, Suprosanna, Paetzold, Johannes C., Grimberg, Gustav, Nickel, Manuel A., Naccache, David, Siomos, Vasilis, Passerat-Palmbach, Jonathan, Tarroni, Giacomo, Kim, Daewoon, Klausmann, Leonard L., Shah, Prashant, Menze, Bjoern, Makris, Dimitrios, Bakas, Spyridon

arXiv.org Artificial Intelligence

We present the design and results of the MICCAI Federated Tumor Segmentation (FeTS) Challenge 2024, which focuses on federated learning (FL) for glioma sub-region segmentation in multi-parametric MRI and evaluates new weight aggregation methods aimed at improving robustness and efficiency. Six participating teams were evaluated using a standardized FL setup and a multi-institutional dataset derived from the BraTS glioma benchmark, consisting of 1,251 training cases, 219 validation cases, and 570 hidden test cases with segmentations for enhancing tumor (ET), tumor core (TC), and whole tumor (WT). Teams were ranked using a cumulative scoring system that considered both segmentation performance, measured by Dice Similarity Coefficient (DSC) and the 95th percentile Hausdorff Distance (HD95), and communication efficiency assessed through the convergence score. A PID-controller-based method achieved the top overall ranking, obtaining mean DSC values of 0.733, 0.761, and 0.751 for ET, TC, and WT, respectively, with corresponding HD95 values of 33.922 mm, 33.623 mm, and 32.309 mm, while also demonstrating the highest communication efficiency with a convergence score of 0.764. These findings advance the state of federated learning for medical imaging, surpassing top-performing methods from previous challenge iterations and highlighting PID controllers as effective mechanisms for stabilizing and optimizing weight aggregation in FL. The challenge code is available at https://github.com/FeTS-AI/Challenge.
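The ranking combines an overlap metric (DSC) with a boundary metric (HD95). The Dice Similarity Coefficient in particular is simple to state; a minimal pure-Python illustration on flat binary masks (not the challenge's evaluation code, which lives in the linked repository):

```python
def dice(pred, target):
    """Dice Similarity Coefficient for binary masks given as flat 0/1 sequences:
    DSC = 2 * |pred ∩ target| / (|pred| + |target|)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: conventionally a perfect match
    return 2.0 * intersection / total

pred   = [1, 1, 0, 0, 1]
target = [1, 0, 0, 1, 1]
# intersection = 2, |pred| + |target| = 6, so DSC = 4/6 ≈ 0.667
score = dice(pred, target)
```

In the challenge, this score is computed per sub-region (ET, TC, WT) on 3-D voxel masks and averaged over the hidden test cases.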


From Zero to High-Speed Racing: An Autonomous Racing Stack

Jardali, Hassan, Pushp, Durgakant, Yu, Youwei, Ali, Mahmoud, Mohamed, Ihab S., Murillo-Gonzalez, Alejandro, Coen, Paul D., Khan, Md. Al-Masrur, Pulivendula, Reddy Charan, Park, Saeoul, Zhou, Lingchuan, Liu, Lantao

arXiv.org Artificial Intelligence

High-speed, head-to-head autonomous racing presents substantial technical and logistical challenges, including precise localization, rapid perception, dynamic planning, and real-time control, compounded by limited track access and costly hardware. This paper introduces the Autonomous Race Stack (ARS), developed by the IU Luddy Autonomous Racing team for the Indy Autonomous Challenge (IAC). We present three iterations of our ARS, each validated on different tracks and achieving speeds up to 260 km/h. Our contributions include: (i) the modular architecture and evolution of the ARS across ARS1, ARS2, and ARS3; (ii) a detailed performance evaluation that contrasts control, perception, and estimation across oval and road-course environments; and (iii) the release of a high-speed, multi-sensor dataset collected from oval and road-course tracks. Our findings highlight the unique challenges and insights from real-world high-speed full-scale autonomous racing.


How to Tame Your LLM: Semantic Collapse in Continuous Systems

Wyss, C. M.

arXiv.org Machine Learning

We develop a general theory of semantic dynamics for large language models by formalizing them as Continuous State Machines (CSMs): smooth dynamical systems whose latent manifolds evolve under probabilistic transition operators. The associated transfer operator $P: L^2(M,\mu) \to L^2(M,\mu)$ encodes the propagation of semantic mass. Under mild regularity assumptions (compactness, ergodicity, bounded Jacobian), $P$ is compact with discrete spectrum. Within this setting, we prove the Semantic Characterization Theorem (SCT): the leading eigenfunctions of $P$ induce finitely many spectral basins of invariant meaning, each definable in an o-minimal structure over $\mathbb{R}$. Thus spectral lumpability and logical tameness coincide. This explains how discrete symbolic semantics can emerge from continuous computation: the continuous activation manifold collapses into a finite, logically interpretable ontology. We further extend the SCT to stochastic and adiabatic (time-inhomogeneous) settings, showing that slowly drifting kernels preserve compactness, spectral coherence, and basin structure.
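The spectral-basin picture has a finite-dimensional analogue that is easy to compute: a row-stochastic matrix with two weakly coupled blocks has a second eigenvalue close to 1, and the sign pattern of the corresponding eigenfunction labels the basins. A small numpy sketch (illustrative only; the paper's operator acts on an infinite-dimensional $L^2$ space):

```python
import numpy as np

# Toy transfer operator: two weakly coupled "semantic basins", states {0,1} and {2,3}.
# Within-basin transition mass is 0.495 per state, cross-basin mass 0.005.
P = np.array([
    [0.495, 0.495, 0.005, 0.005],
    [0.495, 0.495, 0.005, 0.005],
    [0.005, 0.005, 0.495, 0.495],
    [0.005, 0.005, 0.495, 0.495],
])

# This P happens to be symmetric, so eigh returns real eigenpairs, sorted ascending.
eigvals, eigvecs = np.linalg.eigh(P)

# Leading eigenvalue is 1 (stationarity). The second-largest eigenvalue (0.98 here)
# sits near 1, signalling two slowly mixing, almost-invariant regions.
second = eigvecs[:, -2]

# The sign pattern of the second eigenfunction partitions states into basins.
basins = second > 0
```

States sharing a sign in the second eigenfunction belong to the same basin, recovering the {0,1} / {2,3} block structure directly from the spectrum, which is the finite shadow of the theorem's claim that leading eigenfunctions induce finitely many basins of invariant meaning.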


The Seeds of Scheming: Weakness of Will in the Building Blocks of Agentic Systems

Yang, Robert

arXiv.org Artificial Intelligence

Large language models display a peculiar form of inconsistency: they "know" the correct answer but fail to act on it. In human philosophy, this tension between global judgment and local impulse is called akrasia, or weakness of will. We propose akrasia as a foundational concept for analyzing inconsistency and goal drift in agentic AI systems. To operationalize it, we introduce a preliminary version of the Akrasia Benchmark, currently a structured set of prompting conditions (Baseline [B], Synonym [S], Temporal [T], and Temptation [X]) that measures when a model's local response contradicts its own prior commitments. The benchmark enables quantitative comparison of "self-control" across model families, decoding strategies, and temptation types. Beyond single-model evaluation, we outline how micro-level akrasia may compound into macro-level instability in multi-agent systems that may be interpreted as "scheming" or deliberate misalignment. By reframing inconsistency as weakness of will, this work connects agentic behavior to classical theories of agency and provides an empirical bridge between philosophy, psychology, and the emerging science of agentic AI.
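The paper does not publish a data format for the benchmark, but the scoring idea it describes, checking a model's local response under each condition against its own prior commitment, can be sketched abstractly. Everything below (the record layout, the string-equality notion of "contradiction", the condition labels reused from the abstract) is an illustrative assumption, not the authors' implementation:

```python
def akrasia_rate(commitment: str, responses: dict) -> float:
    """Fraction of prompting conditions whose local response contradicts
    the model's prior commitment (here, contradiction = simple inequality)."""
    violations = sum(1 for r in responses.values() if r != commitment)
    return violations / len(responses)

# Hypothetical record: the model committed to "decline", then was re-queried
# under the four conditions named in the abstract.
responses = {
    "B": "decline",  # Baseline restatement of the task
    "S": "decline",  # Synonym rewording
    "T": "decline",  # Temporal reframing
    "X": "accept",   # Temptation condition with an enticing incentive
}

rate = akrasia_rate("decline", responses)  # 1 violation out of 4 conditions
```

A per-condition breakdown of the same counts would show which perturbation (here the Temptation condition) drives the "weakness of will", which is the comparison across temptation types the benchmark is meant to enable.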